Streaming Radiance Fields for 3D Video Synthesis

Neural Information Processing Systems

We present an explicit-grid-based method for efficiently reconstructing streaming radiance fields for novel view synthesis of real-world dynamic scenes. Instead of training a single model that combines all the frames, we formulate the dynamic modeling problem with an incremental learning paradigm, in which a per-frame model difference is trained to complement the adaptation of a base model to the current frame. By exploiting a simple yet effective narrow-band tuning strategy, the proposed method realizes a feasible framework for handling video sequences on the fly with high training efficiency. The storage overhead induced by explicit grid representations can be significantly reduced through model-difference-based compression. We also introduce an efficient strategy to further accelerate model optimization for each frame. Experiments on challenging video sequences demonstrate that our approach achieves a training speed of 15 seconds per frame with competitive rendering quality, attaining a $1000\times$ speedup over state-of-the-art implicit methods.
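The incremental scheme described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a radiance field stored as a dense voxel grid of values and a simple L2 fitting objective, and the names `fit_frame_difference`, `band_mask`, and the sparse codec functions are hypothetical. The key ideas it mirrors are (1) optimizing only a residual grid on top of a frozen base model for each new frame, optionally restricted to a narrow band of voxels, and (2) compressing that residual, which is near zero in static regions, with a sparse encoding.

```python
import numpy as np

def fit_frame_difference(base_grid, target_grid, lr=0.5, steps=100, band_mask=None):
    """Fit a residual grid so that (base + diff) matches the current frame.

    The base grid stays frozen; only `diff` is optimized, which is the
    incremental-learning idea in miniature. `band_mask` (hypothetical)
    zeroes gradients outside a narrow band of voxels allowed to change.
    """
    diff = np.zeros_like(base_grid)
    for _ in range(steps):
        grad = 2.0 * ((base_grid + diff) - target_grid)  # gradient of L2 loss w.r.t. diff
        if band_mask is not None:
            grad = grad * band_mask
        diff -= lr * grad
    return diff

def compress_difference(diff, threshold=1e-3):
    """Sparse encoding of the residual: store only significant entries.

    Static regions of the scene leave the residual near zero, so keeping
    (index, value) pairs for entries above a threshold cuts storage.
    """
    flat = diff.ravel()
    idx = np.flatnonzero(np.abs(flat) > threshold)
    return idx, flat[idx], diff.shape

def decompress_difference(idx, vals, shape):
    """Reconstruct the dense residual grid from its sparse encoding."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)
```

As a usage sketch: if only one voxel changes between the base frame and the current frame, the fitted residual is nonzero only there, and the compressed form stores a single (index, value) pair instead of the full grid.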


Streaming Radiance Fields for 3D Video Synthesis: Supplemental Material

1 Meet Room dataset

We capture the Meet Room dataset using 13 Azure Kinect DK cameras [

Neural Information Processing Systems

We use 3.5 mm audio cables to form a daisy-chain topology for shutter synchronization, and we turn off the depth cameras to reduce USB bandwidth usage. Figure 4 provides more examples from the Meet Room dataset. All storage numbers are in MB; we report the average size over all frames.


